The Fundamental Nature of the Log Loss Function

Author

  • Vladimir Vovk
Abstract

The standard loss functions used in the literature on probabilistic prediction are the log loss function and the Brier loss function; however, any proper loss function can be used for comparison of prediction algorithms. This note shows that the log loss function is most selective in that any prediction algorithm that is optimal for a given data sequence (in the sense of the algorithmic theory of randomness) under the log loss function will be optimal under any computable proper mixable loss function; on the other hand, there is a data sequence and a prediction algorithm that is optimal for that sequence under the Brier loss function but not under the log loss function.
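As an informal illustration (not part of the paper), the Python sketch below evaluates the two losses named in the abstract, the log loss and the Brier loss, for a single probabilistic forecast; the forecast vector and the observed outcome are made-up values used purely for demonstration.

    import math

    def log_loss(probs, outcome):
        # Log loss: minus the log of the probability assigned to the realised outcome.
        return -math.log(probs[outcome])

    def brier_loss(probs, outcome):
        # Brier loss: squared Euclidean distance between the forecast vector
        # and the 0/1 indicator vector of the realised outcome.
        return sum((p - (1.0 if i == outcome else 0.0)) ** 2
                   for i, p in enumerate(probs))

    # Hypothetical forecast over three outcomes, with outcome 0 observed.
    forecast = [0.7, 0.2, 0.1]
    observed = 0
    print(log_loss(forecast, observed))    # about 0.357
    print(brier_loss(forecast, observed))  # 0.09 + 0.04 + 0.01 = 0.14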


Related articles

Minimax Estimator of a Lower Bounded Parameter of a Discrete Distribution under a Squared Log Error Loss Function

The problem of estimating the parameter θ, when it is restricted to a lower bounded interval, is considered in a class of discrete distributions including the Binomial, Negative Binomial, and discrete Weibull. We give necessary and sufficient conditions under which the Bayes estimator of θ with respect to a two-point boundary-supported prior is minimax under the squared log error loss function...
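For orientation (an illustration of mine, not taken from the cited paper), the squared log error loss used in this and several of the following abstracts is commonly written L(θ, δ) = (ln δ − ln θ)², and the short Python sketch below evaluates it for a hypothetical estimate.

    import math

    def squared_log_error(theta, delta):
        # Squared log error (SLE) loss: penalises the estimate delta relative to
        # the true parameter theta symmetrically on the log scale.
        return (math.log(delta) - math.log(theta)) ** 2

    # Hypothetical true value and estimate, for illustration only.
    print(squared_log_error(theta=0.5, delta=0.6))  # (ln 0.6 - ln 0.5)^2, about 0.033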


Bayes, E-Bayes and Robust Bayes Premium Estimation and Prediction under the Squared Log Error Loss Function

In risk analysis based on the Bayesian framework, premium calculation requires the specification of a prior distribution for the risk parameter of the heterogeneous portfolio. When prior knowledge is vague, E-Bayesian and robust Bayesian analysis can be used to handle the uncertainty in specifying the prior distribution by considering a class of priors instead of a single prior. In th...


Admissibility in a One Parameter Non-regular Family with Squared-log Error Loss Function

Consider an estimation problem in a one-parameter non-regular distribution when both endpoints of the support depend on a single parameter. In this paper, we give sufficient conditions for a generalized Bayes estimator of a parametric function to be admissible. Some examples are given.


Admissible and Minimax Estimator of the Parameter $\theta$ in a Binomial $Bin(n,\theta)$ Distribution under Squared Log Error Loss Function in a Lower Bounded Parameter Space

Extended Abstract. The study of truncated parameter spaces is of general interest for the following reasons: 1. They often occur in practice; in many cases certain parameter values can be excluded from the parameter space. Nearly all problems in practice have a truncated parameter space, and it is almost impossible to argue in practice that a parameter is unbounded. In truncated parameter...


Minimax Estimation of the Scale Parameter in a Family of Transformed Chi-Square Distributions under Asymmetric Squared Log Error and MLINEX Loss Functions

This paper is concerned with the problem of finding the minimax estimators of the scale parameter θ in a family of transformed chi-square distributions, under the asymmetric squared log error (SLE) and modified linear exponential (MLINEX) loss functions, using the Lehmann Theorem [2]. We also show that the results of Podder et al. [4] for the Pareto distribution are a special case of our results for th...
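As a hedged aside (my own sketch, not code from the cited paper), a commonly used parameterisation of the modified linear exponential (MLINEX) loss is L(θ, δ) = w[(δ/θ)^c − c ln(δ/θ) − 1] with constants w > 0 and c ≠ 0; the Python snippet below evaluates that form to show its asymmetry in the ratio δ/θ.

    import math

    def mlinex_loss(theta, delta, w=1.0, c=1.0):
        # MLINEX loss in one common parameterisation (an assumption here, not a
        # formula quoted from the paper); asymmetric in the ratio delta/theta.
        r = delta / theta
        return w * (r ** c - c * math.log(r) - 1.0)

    # Hypothetical values: over- and under-estimation by the same amount
    # receive different penalties.
    print(mlinex_loss(theta=2.0, delta=2.5))  # ratio 1.25, about 0.027
    print(mlinex_loss(theta=2.0, delta=1.5))  # ratio 0.75, about 0.038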




Publication year: 2015